Intuitive control of supernumerary robotic limbs through a tactile-encoded neural interface

Jia, Tianyu, Yang, Xingchen, McGeady, Ciaran, Li, Yifeng, Lin, Jinzhi, Ho, Kit San, Pan, Feiyu, Ji, Linhong, Li, Chong, Farina, Dario

arXiv.org Artificial Intelligence

These authors contributed equally to this work. Abstract: Brain-computer interfaces (BCIs) promise to extend human movement capabilities by enabling direct neural control of supernumerary effectors, yet integrating augmented commands with multiple degrees of freedom without disrupting natural movement remains a key challenge. Here, we propose a tactile-encoded BCI that leverages sensory afferents through a novel tactile-evoked P300 paradigm, allowing intuitive and reliable decoding of supernumerary motor intentions even when superimposed with voluntary actions. The interface was evaluated in a multi-day experiment comprising a single motor recognition task to validate baseline BCI performance and a dual-task paradigm to assess the potential influence between the BCI and natural human movement. The brain interface achieved real-time and reliable decoding of four supernumerary degrees of freedom, with significant performance improvements after only three days of training. Importantly, after training, performance did not differ significantly between the single- and dual-BCI task conditions, and natural movement remained unimpaired during concurrent supernumerary control. Lastly, the interface was deployed in a movement augmentation task, demonstrating its ability to command two supernumerary robotic arms for functional assistance during bimanual tasks. These results establish a new neural interface paradigm for movement augmentation through stimulation of sensory afferents, expanding motor degrees of freedom without impairing natural movement. One-Sentence Summary: A tactile-encoded neural interface enables intuitive control of supernumerary limbs without compromising natural human movement. Main Text: INTRODUCTION Humans interact with their surroundings with remarkable dexterity and efficiency.
Recent advances in robotics and neural interfaces hold the potential to increase these capabilities, enhancing human movement beyond its natural limits. Movement augmentation aims to increase the mechanical degrees of freedom (DoFs) an individual can exert over their surroundings (1), allowing movement tasks to be performed more efficiently or enabling actions otherwise impossible with natural limbs alone, such as trimanual manipulation with a third arm (2). A central challenge, however, lies in achieving practical control of supernumerary effectors (SEs) without compromising natural movement. Current strategies for augmenting DoFs often rely on augmentation by transfer, in which control of SEs is derived from the function of an existing body part, typically one that is task-irrelevant (1, 3, 4).
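The tactile-evoked P300 decoding described in the abstract can be illustrated with a minimal sketch. This is not the authors' pipeline: the sampling rate, window, number of trials, and the synthetic "P300" bump are all illustrative assumptions. The idea shown is the classic one behind P300 interfaces: average the EEG epochs following each candidate stimulus and select the candidate whose averaged response shows the largest positive deflection around 300 ms.

```python
import numpy as np

# Hypothetical setup: four candidate commands (tactile stimulation sites),
# 20 one-second EEG epochs per site. All parameters are illustrative.
rng = np.random.default_rng(1)
fs = 250                                   # assumed sampling rate (Hz)
n_sites, n_trials, n_samples = 4, 20, fs
epochs = rng.normal(0.0, 1.0, size=(n_sites, n_trials, n_samples))

# Simulate attention to site 2 by adding a P300-like bump near 300 ms.
target = 2
t = np.arange(n_samples) / fs
p300 = 3.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))
epochs[target] += p300

def decode(epochs, fs):
    """Average epochs per site, then pick the site with the largest
    mean amplitude inside the P300 window (~250-450 ms)."""
    erp = epochs.mean(axis=1)                     # trial-averaged ERPs
    win = slice(int(0.25 * fs), int(0.45 * fs))   # P300 window
    return int(np.argmax(erp[:, win].mean(axis=1)))

decoded = decode(epochs, fs)
```

Trial averaging suppresses the background EEG by roughly the square root of the trial count, which is why the attended site's evoked response becomes separable despite single-trial noise.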


Uncertainty-Guided Coarse-to-Fine Tumor Segmentation with Anatomy-Aware Post-Processing

Isler, Ilkin Sevgi, Mohaisen, David, Lisle, Curtis, Turgut, Damla, Bagci, Ulas

arXiv.org Artificial Intelligence

Reliable tumor segmentation in thoracic computed tomography (CT) remains challenging due to boundary ambiguity, class imbalance, and anatomical variability. We propose an uncertainty-guided, coarse-to-fine segmentation framework that combines full-volume tumor localization with refined region-of-interest (ROI) segmentation, enhanced by anatomically aware post-processing. The first-stage model generates a coarse prediction, followed by anatomically informed filtering based on lung overlap, proximity to lung surfaces, and component size. The resulting ROIs are segmented by a second-stage model trained with uncertainty-aware loss functions to improve accuracy and boundary calibration in ambiguous regions. Experiments on private and public datasets demonstrate improvements in Dice and Hausdorff scores, with fewer false positives and enhanced spatial interpretability. These results highlight the value of combining uncertainty modeling and anatomical priors in cascaded segmentation pipelines for robust and clinically meaningful tumor delineation. On the Orlando dataset, our framework improved Swin UNETR Dice from 0.4690 to 0.6447. Reduction in spurious components was strongly correlated with segmentation gains, underscoring the value of anatomically informed post-processing.
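The anatomically informed filtering step described above (lung overlap, proximity, component size) can be sketched as a connected-component filter. This is a minimal illustration, not the paper's implementation: the overlap and size thresholds, and the synthetic volumes, are assumptions.

```python
import numpy as np
from scipy import ndimage

def filter_components(pred, lung_mask, min_overlap=0.5, min_size=50):
    """Keep predicted tumor components that overlap the lung mask
    sufficiently and exceed a minimum voxel count; drop the rest."""
    labeled, n = ndimage.label(pred)
    out = np.zeros_like(pred)
    for i in range(1, n + 1):
        comp = labeled == i
        size = comp.sum()
        overlap = (comp & lung_mask).sum() / size
        if size >= min_size and overlap >= min_overlap:
            out[comp] = 1
    return out

# Synthetic example: one component inside the lung, one spurious
# component outside it.
vol = np.zeros((32, 32, 32), dtype=bool)
lung = np.zeros_like(vol)
lung[4:28, 4:28, 4:28] = True
vol[10:14, 10:14, 10:14] = True   # 64 voxels inside the lung: kept
vol[0:3, 0:3, 0:3] = True         # 27 voxels outside the lung: removed
kept = filter_components(vol, lung)
```

Removing small, anatomically implausible components before the second-stage model is what drives the reduction in spurious detections the abstract reports.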


Actor Critic with Experience Replay-based automatic treatment planning for prostate cancer intensity modulated radiotherapy

Abrar, Md Mainul, Sapkota, Parvat, Sprouts, Damon, Jia, Xun, Chi, Yujie

arXiv.org Artificial Intelligence

Background: Real-time treatment planning in IMRT is challenging due to complex beam interactions. AI has improved automation, but existing models require large, high-quality datasets and lack universal applicability. Deep reinforcement learning (DRL) offers a promising alternative by mimicking human trial-and-error planning. Purpose: Develop a stochastic policy-based DRL agent for automatic treatment planning with efficient training, broad applicability, and robustness against adversarial attacks using Fast Gradient Sign Method (FGSM). Methods: Using the Actor-Critic with Experience Replay (ACER) architecture, the agent tunes treatment planning parameters (TPPs) in inverse planning. Training is based on prostate cancer IMRT cases, using dose-volume histograms (DVHs) as input. The model is trained on a single patient case, validated on two independent cases, and tested on 300+ plans across three datasets. Plan quality is assessed using ProKnow scores, and robustness is tested against adversarial attacks. Results: Despite training on a single case, the model generalizes well. Before ACER-based planning, the mean plan score was 6.20$\pm$1.84; after, 93.09% of cases achieved a perfect score of 9, with a mean of 8.93$\pm$0.27. The agent effectively prioritizes optimal TPP tuning and remains robust against adversarial attacks. Conclusions: The ACER-based DRL agent enables efficient, high-quality treatment planning in prostate cancer IMRT, demonstrating strong generalizability and robustness.
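The adversarial-robustness test named in the abstract, the Fast Gradient Sign Method (FGSM), perturbs the input in the direction of the loss gradient's sign. The sketch below uses a toy linear score over a discretized DVH rather than the actual ACER agent; the weights, epsilon, and DVH curve are all illustrative assumptions.

```python
import numpy as np

# Toy stand-in for the agent's scoring of a dose-volume histogram (DVH):
# a fixed linear score. Weights are arbitrary, for illustration only.
rng = np.random.default_rng(0)
w = rng.normal(size=100)

def score(dvh):
    return float(w @ dvh)

def fgsm(dvh, epsilon=0.02):
    """FGSM: step each input coordinate by epsilon in the direction of
    the gradient's sign. For a linear score, the gradient is just w."""
    grad = w
    return np.clip(dvh + epsilon * np.sign(grad), 0.0, 1.0)

dvh = np.linspace(1.0, 0.0, 100)   # monotone DVH-like curve in [0, 1]
dvh_adv = fgsm(dvh)
max_perturb = np.abs(dvh_adv - dvh).max()
```

The attack is bounded: no input coordinate moves by more than epsilon, so robustness is measured by how much the plan score degrades under this worst-case-per-coordinate perturbation.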


Robot camera finds alligator in Florida water pipe

BBC News

When a series of potholes appeared in the city, the town of Oviedo, Florida sent a robotic camera into a storm water pipe to investigate "anomalies" under the roadway. They were shocked to find a 5ft (1.5m) alligator. "At first, they thought it was a toad and in the video, you see two little glowing eyes until you get closer - but when it turned around, they saw the long tail of the alligator and followed it through the pipes," Oviedo city officials said. "Thank goodness our crews have a robot", officials added, warning locals not to wander down into the pipes.


Alligator spotted roaming Florida city's underground stormwater pipes with robotic camera

FOX News

A city crew in Oviedo, Florida, located about 20 miles northeast of Orlando, spotted a 5-foot alligator lurking in a stormwater pipe while investigating the pipes with a robotic camera last week, officials said Tuesday. The stormwater crew was on Lockwood Boulevard to check on a series of potholes that appeared in the roadway on Friday, the city said in a Facebook post. The crew used a four-wheel robotic camera to go into the pipes below the road and investigate any anomalies such as leaking pipes, cracks or other defects underground, officials said. However, crews instead found a different kind of anomaly while searching the underground pipes.


Feature Encodings for Gradient Boosting with Automunge

Teague, Nicholas J.

arXiv.org Artificial Intelligence

Automunge is a tabular preprocessing library that encodes dataframes for supervised learning. When selecting a default feature encoding strategy for gradient boosted learning, one may consider metrics of training duration and achieved predictive performance associated with the feature representations. Automunge offers a default of binarization for categoric features and z-score normalization for numeric. The presented study sought to validate those defaults by way of benchmarking on a series of diverse data sets by encoding variations with tuned gradient boosted learning. We found that on average our chosen defaults were top performers both from a tuning duration and a model performance standpoint. Another key finding was that one hot encoding did not perform in a manner consistent with suitability to serve as a categoric default in comparison to categoric binarization.
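The two defaults the study validates can be sketched in a few lines. This is not Automunge's actual implementation or category-to-code mapping, just an illustration of the idea: z-score normalization for numeric columns, and binarization that encodes n categories in ceil(log2 n) binary columns rather than the n columns one-hot encoding would use.

```python
import numpy as np

def z_score(col):
    """Default numeric encoding: subtract the mean, divide by the std."""
    col = np.asarray(col, dtype=float)
    return (col - col.mean()) / col.std()

def binarize(col):
    """Categoric binarization: assign each distinct category an integer
    index and emit its bits, using ceil(log2(n)) columns for n categories
    (one-hot would need n columns)."""
    cats = sorted(set(col))
    width = max(1, int(np.ceil(np.log2(len(cats)))))
    index = {c: i for i, c in enumerate(cats)}
    return np.array([[(index[c] >> b) & 1 for b in range(width)]
                     for c in col])

colors = ["red", "green", "blue", "green", "red"]
encoded = binarize(colors)          # 3 categories -> 2 binary columns
z = z_score([1.0, 2.0, 3.0, 4.0])   # mean 0, std 1 after encoding
```

The column-count difference is the practical point: for high-cardinality features, binarization keeps the encoded width logarithmic in the number of categories, which plausibly contributes to the shorter tuning durations the study observes.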


Semi-autonomous Prosthesis Control Using Minimal Depth Information and Vibrotactile Feedback

Castro, Miguel Nobre, Dosen, Strahinja

arXiv.org Artificial Intelligence

A semi-autonomous prosthesis control based on computer vision can be used to improve performance while decreasing the cognitive burden, especially when using advanced systems with multiple functions. However, a drawback of this approach is that it relies on the complex processing of a significant amount of data (e.g., a point cloud provided by a depth sensor), which can be a challenge when deploying such a system onto an embedded prosthesis controller. In the present study, therefore, we propose a novel method to reconstruct the shape of the target object using minimal data. Specifically, four concurrent laser scanner lines provide partial contours of the object cross-section. Simple geometry is then used to reconstruct the dimensions and orientation of spherical, cylindrical and cuboid objects. The prototype system was implemented using a depth sensor to simulate the scan lines and vibrotactile feedback to aid the user during aiming of the laser towards the target object. The prototype was tested on ten able-bodied volunteers who used the semi-autonomous prosthesis to grasp a set of ten objects of different shape, size and orientation. The novel prototype was compared against the benchmark system, which used the full depth data. The results showed that the novel system could be used to successfully handle all the objects, and that performance improved with training, although it remained somewhat worse than the benchmark. The present study is therefore an important step towards building a compact system for embedded depth sensing specialized for prosthesis grasping.
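The "simple geometry" step can be illustrated for the cylindrical case: a single scan line across a cylinder yields points on a circular arc, and an algebraic (Kasa) circle fit recovers the cross-section's center and radius. This is a sketch of the general idea, not the paper's method; the simulated arc and its parameters are assumptions.

```python
import numpy as np

def fit_circle(x, y):
    """Algebraic (Kasa) circle fit. A circle (x-a)^2 + (y-b)^2 = r^2
    rearranges to the linear system 2ax + 2by + c = x^2 + y^2 with
    c = r^2 - a^2 - b^2, solvable by least squares from a partial arc."""
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x ** 2 + y ** 2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    r = np.sqrt(c + a ** 2 + b ** 2)
    return (a, b), r

# Simulate one laser line crossing a radius-3 cylinder centred at (1, 2):
# only a partial arc of the cross-section is observed.
theta = np.linspace(0.2, 1.4, 25)
x = 1 + 3 * np.cos(theta)
y = 2 + 3 * np.sin(theta)
center, radius = fit_circle(x, y)
```

Because the fit is linear in its unknowns, it runs in microseconds on a handful of contour points, which is what makes this kind of minimal-data reconstruction attractive for an embedded prosthesis controller.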


Geometric Regularization from Overparameterization

Teague, Nicholas J.

arXiv.org Artificial Intelligence

The volume of the distribution of weight sets associated with a loss value may be the source of implicit regularization from overparameterization due to the phenomenon of contracting volume with increasing dimensions for geometric figures demonstrated by hyperspheres. We introduce the geometric regularization conjecture and extend it to an explanation for the double descent phenomenon by considering a similar property resulting from shrinking intrinsic dimensionality of the distribution of potential weight set updates available along the training path, where if that distribution retracts across a volume versus dimensionality curve peak when approaching the global minima we could expect geometric regularization to re-emerge. We illustrate how data fidelity representational complexity may influence model capacity double descent interpolation thresholds. The existence of epoch and model capacity double descent curves originating from different geometric forms may imply universality of closed n-manifolds having dimensionally adjusted n-sphere volumetric correspondence.
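The contracting-volume phenomenon the abstract leans on is a standard fact about n-balls: the volume of the unit n-ball, V_n = pi^(n/2) / Gamma(n/2 + 1), rises with dimension, peaks at n = 5, and then shrinks toward zero. A few lines verify this:

```python
import math

def unit_ball_volume(n):
    """Volume of the unit n-ball: V_n = pi^(n/2) / Gamma(n/2 + 1)."""
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1)

volumes = [unit_ball_volume(n) for n in range(1, 21)]
peak_dim = 1 + volumes.index(max(volumes))   # peaks at n = 5
```

The "volume versus dimensionality curve peak" in the conjecture refers to exactly this non-monotone profile: past the peak, adding dimensions makes equal-radius regions of weight space exponentially smaller.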


An Artificial Intelligence Author Makes Its Way into Literature with a Love Story

#artificialintelligence

Is it possible to learn how to feel? A research study to validate artificial creativity will discover it. Launching in Spanish, Falta Una Palabra (translated as In Need of a Word) is a novel written by Dr. Ángel García-Crespo with the help of AI. It tells the love story of Beatriz and Benito, two people looking for a word to describe the nature of their relationship. Beatriz and Benito share a dilemma that feeds their passion: they can't be together or apart.


What Went Wrong With Zillow? A Real-Estate Algorithm Derailed Its Big Bet

WSJ.com: WSJD - Technology

The first quarter delivered home-sale profits that were more than twice as high as anticipated, the company said. Zillow expected to make money primarily from transaction fees and from services such as title insurance--not from making a killing on the flip. The company's algorithm, which was supposed to predict housing prices, didn't seem to understand the market. Zillow was also behind on its target for home purchases. By the summer, it had the opposite problem, the company later acknowledged.